Art. 1
Subject Matter & Objectives
General
Summary
Establishes the AI Act's purpose: to lay down harmonised rules for AI systems placed on or used in the EU market, ensuring safety and respect for fundamental rights.
✓ Use Cases
- An EU company developing an AI tool for document review knows they must comply before market placement
- A US company exporting an AI hiring tool to Germany becomes subject to these rules
- A hospital deploying a diagnostic AI confirms it is in scope before procurement
✗ Violations
- Claiming a product is "just software" to escape AI Act scope when it uses ML models
- A non-EU developer ignoring the regulation because they are based outside the EU
Art. 2
Scope of Application
General
Summary
Defines who is covered: providers, deployers, importers, distributors, and manufacturers placing AI systems in the EU market or affecting EU persons, regardless of where they are established.
✓ Use Cases
- A UK AI startup selling to EU clients is captured under extraterritorial scope
- An open-source AI framework used commercially in the EU is in scope for deployers
- EU military AI systems are explicitly excluded and handled separately
✗ Violations
- A US firm deploying facial recognition in EU airports claiming extraterritorial exemption
- A deployer misclassifying as a "distributor" to avoid provider-level obligations
Art. 3
Definitions
General
Summary
Defines 65+ key terms including "AI system," "provider," "deployer," "general-purpose AI model," "high-risk AI system," and "intended purpose."
✓ Use Cases
- A regulator uses Art. 3's definition to determine whether a rule-based system qualifies as an "AI system"
- A company determines they are a "deployer" (not provider) because they use a third-party AI system
- Legal teams use the "general-purpose AI" definition to understand GPT-4 class model obligations
✗ Violations
- Misclassifying a system that adapts outputs based on training as "non-AI" to avoid compliance
- Calling yourself a "distributor" to evade provider obligations when you substantially modify an AI system
Art. 5
Prohibited AI Practices
Unacceptable
Summary
Prohibits AI practices deemed to pose unacceptable risk: subliminal or purposefully manipulative techniques, exploitation of vulnerabilities (age, disability, social or economic situation), biometric categorisation inferring sensitive attributes, untargeted scraping of facial images, social scoring, emotion inference in workplace and education settings, and real-time remote biometric identification in public spaces by law enforcement. Most prohibitions are absolute; a few carry narrowly defined exceptions, listed below.
✓ Permitted Uses
- Real-time biometric ID by law enforcement with prior judicial authorisation in serious crime cases
- Post-hoc biometric analysis of a recorded crime scene by police with authorisation
- Emotion detection for safety-critical workplace monitoring with worker consent and oversight
✗ Violations
- A retail chain scraping social media images to build a customer emotion database
- A government deploying social credit scoring to restrict citizens' access to services
- An insurer using subliminal audio cues in claims calls to influence customer decisions
- A school deploying emotion AI to profile students' attentiveness without consent
- Real-time facial recognition in public spaces without law enforcement justification
Art. 6
Classification of High-Risk AI
High Risk
Summary
An AI system is high-risk if it is used as a safety component of regulated products (Annex I), or falls in Annex III categories including biometrics, education, employment, credit scoring, law enforcement, migration, and justice.
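As a rough illustration (not a legal test), the classification rule can be sketched as a simple decision function; the category names, the Art. 6(3) simplification, and the function signature below are assumptions for illustration only.

```python
# Minimal sketch of the Art. 6 classification rule (illustrative, not legal advice).
# The category names and the Art. 6(3) simplification below are assumptions.
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services_credit", "law_enforcement", "migration", "justice",
}

def is_high_risk(safety_component_of_annex_i_product: bool,
                 annex_iii_category: str | None,
                 narrow_procedural_task_only: bool = False) -> bool:
    """True if the system is presumptively high-risk under Art. 6."""
    if safety_component_of_annex_i_product:
        return True  # Art. 6(1): safety components of Annex I regulated products
    if annex_iii_category in ANNEX_III_CATEGORIES:
        # Art. 6(3): derogation where the system only performs a narrow procedural
        # task and does not materially influence the decision (heavily simplified).
        return not narrow_procedural_task_only
    return False

# An AI CV-screening tool used in hiring decisions:
print(is_high_risk(False, "employment"))  # True
```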
✓ Use Cases
- An AI CV-screening tool used in hiring decisions is classified as high-risk (Annex III)
- An AI medical device component triggers both MDR and AI Act obligations
- An AI credit-scoring model used for mortgage decisions is high-risk
- AI safety components in autonomous vehicles are high-risk under Annex I (vehicle type-approval legislation)
✗ Violations
- Labelling a high-risk hiring AI as "decision support only" without the required governance
- Deploying a biometric access AI in a workplace without Annex III classification assessment
Art. 9
Risk Management System
High Risk
Summary
High-risk AI providers must establish and maintain a continuous risk management system covering identification, estimation, evaluation, and mitigation of known and foreseeable risks — throughout the entire lifecycle.
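One way to make the lifecycle requirement concrete is a versioned risk register that is re-reviewed at every model change; the sketch below is a minimal illustration with assumed field names, not a prescribed format.

```python
# Minimal sketch of a lifecycle risk register for Art. 9 (field names are assumptions).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str                 # identified known or foreseeable risk
    severity: int                    # estimated impact, 1 (low) to 5 (high)
    likelihood: int                  # estimated probability, 1 to 5
    mitigations: list[str] = field(default_factory=list)
    residual_risk_accepted: bool = False
    acceptance_rationale: str = ""   # Art. 9 expects documented reasoning
    last_reviewed: date = field(default_factory=date.today)

    def score(self) -> int:
        return self.severity * self.likelihood

register = [
    RiskEntry("R-001", "Score disadvantages applicants over 50",
              severity=4, likelihood=3,
              mitigations=["age-blind features", "quarterly bias audit"],
              residual_risk_accepted=True,
              acceptance_rationale="Residual disparity <1pp after mitigation"),
]
# "Continuous" means this register is updated at every model change and
# post-market finding, not only before launch.
```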
✓ Use Cases
- A financial AI company documents a risk register with residual risk acceptance for each identified harm
- A healthcare AI provider continuously monitors post-deployment outputs for risk drift
- An AI hiring tool provider updates its risk assessment when adding a new job category
✗ Violations
- Performing a one-time pre-launch risk assessment and never updating it
- Failing to assess risks arising from interaction of the AI system with other systems
- Accepting residual risks without documenting reasoning or testing mitigation measures
Art. 10
Data & Data Governance
High Risk
Summary
Training, validation, and testing data for high-risk AI must meet quality criteria: relevance, representativeness, and, to the best extent possible, freedom from errors and completeness, with appropriate consideration of the characteristics of the persons or groups on whom the system is intended to be used and examination for possible biases.
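A basic representativeness audit is one building block of this governance; the sketch below compares subgroup shares in a training set against reference shares, with the group labels, reference figures, and 0.8 ratio threshold all assumptions.

```python
# Minimal sketch: flag demographic under-representation in training data (Art. 10).
# Group labels, reference shares, and the 0.8 ratio threshold are assumptions.
from collections import Counter

def representation_report(groups: list[str], reference_share: dict[str, float],
                          min_ratio: float = 0.8) -> dict[str, dict]:
    counts = Counter(groups)
    total = len(groups)
    report = {}
    for group, expected in reference_share.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "under_represented": observed < min_ratio * expected,
        }
    return report

training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
print(representation_report(training_groups, {"A": 0.6, "B": 0.3, "C": 0.1}))
# Group C appears at 5% against an expected 10%, so it is flagged for review.
```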
✓ Use Cases
- A credit AI provider audits training data for racial under-representation before deployment
- A medical AI company validates test data against diverse demographic subgroups
- An NLP provider documents data sources and pre-processing pipelines for each model version
✗ Violations
- Training a recidivism prediction AI on historically biased criminal justice data without bias mitigation
- Using scraped internet data without filtering for known errors, hate speech, or outliers
- Failing to document the proportion of demographic groups in training sets
Art. 11
Technical Documentation
High Risk
Summary
Providers must draw up technical documentation (Annex IV) before market placement and keep it up to date: general description, design logic, training methodology, performance metrics, limitations, and testing results.
✓ Use Cases
- A startup documents model architecture, training data, and known limitations in an Annex IV package
- A notified body reviews technical documentation to certify a high-risk AI before CE marking
- An AI provider keeps versioned documentation to trace changes across model updates
✗ Violations
- Providing only a marketing brochure instead of Annex IV technical documentation
- Failing to update technical documentation when a model undergoes substantial modification
Art. 12
Record-Keeping & Logging
High Risk
Summary
High-risk AI systems must be technically capable of automatically logging events relevant to identifying risks and supporting post-market monitoring. Providers and deployers must retain these logs for a period appropriate to the system's intended purpose and for at least six months (Arts. 19 and 26), unless other Union or national law requires longer.
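In practice this usually means structured, append-only event records capturing timestamp, operator, inputs, and outputs; the sketch below shows one possible shape, with the field names and JSON-lines format as assumptions rather than a mandated schema.

```python
# Minimal sketch of automatic event logging for a high-risk AI system (Art. 12).
# Field names and the JSON-lines format are assumptions, not a mandated schema.
import json, hashlib
from datetime import datetime, timezone

def log_event(path: str, operator_id: str, inputs: dict, output: dict,
              model_version: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator_id": operator_id,
        "model_version": model_version,
        # Hash inputs if they contain personal data that must not sit in plain logs.
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:   # append-only audit trail
        f.write(json.dumps(record) + "\n")

log_event("decisions.jsonl", operator_id="caseworker-17",
          inputs={"applicant_id": "A-123", "features": [0.2, 0.7]},
          output={"decision": "refer_to_human", "confidence": 0.61},
          model_version="credit-risk-2.4.1")
```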
✓ Use Cases
- A law enforcement AI logs every query, operator, and output timestamp for audit trails
- A medical AI logs confidence scores alongside decisions for each patient interaction
- An HR AI preserves decision logs for disputed hiring or promotion decisions
✗ Violations
- Deleting AI decision logs after 30 days for storage cost reasons
- Logging outputs but not logging the inputs that triggered each decision
Art. 13
Transparency & Information
High Risk
Summary
High-risk AI systems must be sufficiently transparent for deployers to understand their purpose, capabilities, limitations, accuracy, and the population the AI is intended to serve — documented in instructions for use.
✓ Use Cases
- An AI tool for judges includes documentation noting demographic groups where accuracy drops
- A credit AI's instructions clearly state it should not be used for insurance pricing
- A medical imaging AI documents performance benchmarks across imaging device types
✗ Violations
- Providing "black box" AI to a bank with no documentation on feature importance
- Claiming 99% accuracy without disclosing this applies only to a specific demographic
Art. 14
Human Oversight
High Risk
Summary
High-risk AI systems must be designed so natural persons can effectively oversee them. This includes the ability to understand outputs, disregard/override/stop the system, and not be over-reliant on AI decisions ("automation bias" safeguards).
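A common design pattern is a hard gate that never finalises a consequential action without an explicit human decision and records every override with a reason; the sketch below is schematic, with the function and field names as assumptions.

```python
# Minimal sketch of a human-in-the-loop gate for Art. 14 (names are assumptions).
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject_id: str
    action: str          # e.g. "deny_parole"
    confidence: float

def apply_decision(rec: Recommendation, reviewer_id: str,
                   accept: bool, override_reason: str = "") -> dict:
    """The system only recommends; a natural person decides and can override."""
    if not accept and not override_reason:
        raise ValueError("Overrides must be documented with a reason (audit trail).")
    return {
        "subject_id": rec.subject_id,
        "final_action": rec.action if accept else "overridden",
        "ai_confidence": rec.confidence,
        "reviewer_id": reviewer_id,
        "override_reason": override_reason,
    }

decision = apply_decision(Recommendation("P-42", "deny_parole", 0.83),
                          reviewer_id="officer-9", accept=False,
                          override_reason="New evidence not in the case file")
print(decision)
```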
✓ Use Cases
- A benefits assessment AI always routes final decisions to a human case worker
- A parole AI gives officers an override button with mandatory reason documentation
- An autonomous drone has a remote pilot kill-switch with a dead-man timeout
✗ Violations
- Designing an AI so fast that humans cannot realistically review decisions before they execute
- Presenting AI recommendations as "final decisions" to eliminate perceived human liability
- Training operators only to accept AI outputs and never to question or override them
Art. 15
Accuracy, Robustness & Cybersecurity
High Risk
Summary
High-risk AI must achieve appropriate accuracy levels, be resilient to errors and inconsistencies, and be resistant to adversarial attacks (prompt injection, data poisoning, model evasion) particularly where outputs influence significant decisions.
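A pre-deployment robustness suite typically perturbs inputs and measures how much predictions move; the sketch below illustrates the idea for a CV-scoring model under a keyword-stuffing perturbation, with the placeholder model, the perturbation, and the threshold all assumptions.

```python
# Minimal sketch of a robustness check against keyword stuffing (Art. 15).
# `Model.predict` is a placeholder; the perturbation and interpretation are assumptions.
class Model:
    def predict(self, cv_text: str) -> float:
        """Returns a suitability score in [0, 1]; stands in for the real model."""
        return min(1.0, 0.3 + 0.1 * cv_text.lower().count("python"))

def keyword_stuffing_attack(cv_text: str, keyword: str = "python", n: int = 20) -> str:
    return cv_text + " " + " ".join([keyword] * n)

def robustness_gap(model: Model, cvs: list[str]) -> float:
    """Average score inflation achievable with a trivial perturbation."""
    gaps = [model.predict(keyword_stuffing_attack(cv)) - model.predict(cv) for cv in cvs]
    return sum(gaps) / len(gaps)

model = Model()
gap = robustness_gap(model, ["Data analyst, 5 years SQL", "Backend engineer, Python"])
print(f"Mean score inflation under keyword stuffing: {gap:.2f}")
# A large gap means the hiring model can be gamed and fails the Art. 15 bar.
```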
✓ Use Cases
- A fraud detection AI is tested against adversarial examples before deployment
- A medical AI undergoes robustness testing under corrupted or noisy input conditions
- A credit AI reports confidence intervals alongside each score to flag uncertain decisions
✗ Violations
- Deploying a hiring AI that can be fooled by simple keyword stuffing in CVs
- Using a model that was not tested against adversarial inputs in a law enforcement context
Art. 16–17
Provider Obligations & QMS
High Risk
Summary
Art. 16 lists the provider's obligations (conformity, technical documentation, CE marking, registration, corrective actions, cooperation with authorities); Art. 17 requires a Quality Management System covering regulatory compliance strategy, design, development, testing, post-market monitoring, complaint handling, and corrective actions, documented in writing and subject to audit.
✓ Use Cases
- A medical AI startup builds an ISO 9001-aligned QMS with AI-specific controls
- A fintech documents its model lifecycle from data collection to post-deployment monitoring
- A provider implements a formal change management process for model updates
✗ Violations
- Pushing model updates to production without a documented change assessment
- No formal process for handling user-reported AI errors or bias complaints
Art. 26
Deployer Obligations
High Risk
Summary
Deployers (organisations using high-risk AI) must: use the AI per the provider's instructions, assign competent human oversight, monitor operation and keep logs, inform affected natural persons that AI is being used, and, where they are public bodies, providers of public services, or deployers of certain credit and insurance systems, complete a fundamental rights impact assessment (Art. 27) before first use.
✓ Use Cases
- A local authority completes a Fundamental Rights Impact Assessment before deploying predictive policing
- An employer notifies workers that an AI monitors productivity and routes disputes to HR
- A bank assigns a responsible officer to review AI credit decisions weekly
✗ Violations
- A public hospital using an AI triage tool without telling patients it influences their care pathway
- A firm using an AI hiring tool outside its documented intended purpose (e.g. for promotion decisions)
Art. 43
Conformity Assessment
High Risk
Summary
Before placing high-risk AI on the market, providers must carry out a conformity assessment. For Annex III systems other than biometrics, this is internal self-assessment (Annex VI); for biometric systems, a notified body must be involved unless harmonised standards or common specifications are applied in full. Annex I products follow the third-party procedures of their sectoral legislation.
✓ Use Cases
- A credit AI undergoes internal conformity assessment with a signed Declaration of Conformity
- A biometric verification system contracts a notified body for third-party review
- An AI provider repeats conformity assessment when making a "substantial modification" to the model
✗ Violations
- Affixing a CE mark without completing the required conformity assessment procedure
- Claiming self-assessment sufficiency for a real-time facial recognition system at a border
Art. 47
EU Declaration of Conformity
High Risk
Summary
The provider must draw up and sign an EU Declaration of Conformity (DoC) before placing a high-risk AI on the market. The DoC must include: provider details, system description, standards applied, notified body reference (if applicable), and a signed statement of compliance.
✓ Use Cases
- An AI diagnostic tool's DoC references Annex IV technical documentation and EN ISO 13485
- A regulator pulls the DoC to verify compliance during a post-market inspection
- A distributor checks the DoC before agreeing to carry a high-risk AI product
✗ Violations
- Signing a Declaration of Conformity without the underlying technical documentation being complete
- Not updating the DoC when the AI system is substantially modified
Art. 50
Transparency for Limited-Risk AI
Limited Risk
Summary
Providers of chatbots, deepfakes, and AI-generated content must ensure users are informed they are interacting with an AI or viewing AI-generated content. Synthetic content must be machine-detectable with watermarking where technically feasible.
✓ Use Cases
- A customer service chatbot displays "You are chatting with an AI assistant" at the start of each session
- A news agency's AI-generated video is watermarked and carries C2PA provenance metadata
- A virtual therapist app discloses at onboarding that the therapist is an AI
✗ Violations
- A call centre bot pretending to be a human named "Sarah" without disclosure
- A political campaign releasing AI-generated candidate videos without any disclosure label
- A media company removing AI watermarks from synthetic news images
Art. 51
GPAI Classification
GPAI
Summary
General-purpose AI models are classified by systemic risk. A GPAI model presents systemic risk if trained with compute exceeding 10^25 FLOPs or if the Commission designates it based on capabilities and reach (e.g. GPT-4 class or above).
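The 10^25 FLOP figure refers to cumulative training compute. A common back-of-the-envelope estimate for dense transformers is roughly 6 x parameters x training tokens; this heuristic and the example model sizes below are assumptions, not a method prescribed by the Act.

```python
# Back-of-the-envelope training-compute estimate against the Art. 51 presumption.
# The 6*N*D heuristic for dense transformers and the example figures are assumptions;
# the Act itself only sets the 1e25 FLOP threshold, not an estimation method.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops_estimate(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens

candidates = {
    "7B model, 2T tokens":    training_flops_estimate(7e9, 2e12),    # ~8.4e22
    "70B model, 15T tokens":  training_flops_estimate(70e9, 15e12),  # ~6.3e24
    "400B model, 30T tokens": training_flops_estimate(400e9, 30e12), # ~7.2e25
}
for name, flops in candidates.items():
    flag = "presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "standard GPAI"
    print(f"{name}: {flops:.1e} FLOPs -> {flag}")
```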
✓ Use Cases
- OpenAI's GPT-4 and similar frontier models are subject to systemic-risk GPAI obligations
- A smaller open-source model (e.g., 7B parameters, low compute) falls under standard GPAI rules only
- The EU AI Office uses the FLOP threshold to objectively trigger systemic risk review
✗ Violations
- A GPAI provider underreporting training compute to fall below the 10^25 FLOP threshold
- A foundation model API provider failing to notify the AI Office of a new high-compute model release
Art. 53
GPAI Provider Obligations
GPAI
Summary
All GPAI model providers must: maintain technical documentation, provide information to downstream providers integrating the model, put in place a policy to comply with EU copyright law, and publish a sufficiently detailed summary of the training content. Systemic-risk models additionally require adversarial testing and serious-incident reporting (Art. 55).
✓ Use Cases
- A GPAI provider publishes a detailed model card covering training data, capabilities, and known limitations
- An API provider implements usage policies refusing misuse cases (e.g., CSAM generation)
- A frontier lab conducts red-team adversarial testing and submits results to the AI Office
✗ Violations
- A GPAI provider releasing a model with no model card or documentation of training data sources
- Failing to report a serious incident involving GPAI misuse for critical infrastructure attacks
- Not having copyright clearance processes for training data scraped from the internet
Art. 55
Systemic Risk Obligations
GPAI SR
Summary
Providers of GPAI models with systemic risk must: evaluate models using standardised, state-of-the-art protocols, perform and document adversarial testing (red-teaming), assess and mitigate systemic risks at Union level, track and report serious incidents to the AI Office, and ensure adequate cybersecurity for the model and its infrastructure.
✓ Use Cases
- Anthropic conducts structured red-teaming before each Claude major release per Art. 55
- A GPAI provider reports a prompt-injection incident that caused harmful output to the EU AI Office without undue delay
- A lab partners with external safety evaluators for standardised capability benchmarking
✗ Violations
- A frontier model provider not reporting a serious misuse incident that caused financial harm at scale
- Claiming red-teaming was performed without adequate documentation of methodology
Art. 72
Post-Market Monitoring
High Risk
Summary
Providers of high-risk AI must implement a post-market monitoring plan from day one, actively collecting and analysing performance data in deployed environments. The plan feeds into corrective actions and regulators' oversight activities.
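One concrete monitoring signal is per-segment accuracy drift between a baseline period and the current quarter; the sketch below is illustrative, with the segment labels and the 3-point drift threshold as assumptions.

```python
# Minimal sketch of post-market accuracy-drift monitoring by segment (Art. 72).
# Segment labels, data shapes, and the 0.03 drift threshold are assumptions.
def segment_accuracy(records: list[dict]) -> dict[str, float]:
    """records: one {"segment": str, "correct": bool} entry per scored case."""
    totals, correct = {}, {}
    for r in records:
        totals[r["segment"]] = totals.get(r["segment"], 0) + 1
        correct[r["segment"]] = correct.get(r["segment"], 0) + int(r["correct"])
    return {s: correct[s] / totals[s] for s in totals}

def drift_alerts(baseline: dict[str, float], current: dict[str, float],
                 max_drop: float = 0.03) -> list[str]:
    return [s for s, acc in current.items() if baseline.get(s, acc) - acc > max_drop]

baseline = {"age<30": 0.91, "age30-50": 0.93, "age>50": 0.90}
current_quarter = [
    {"segment": "age>50", "correct": c} for c in [True] * 80 + [False] * 20
] + [{"segment": "age<30", "correct": c} for c in [True] * 92 + [False] * 8]
print(drift_alerts(baseline, segment_accuracy(current_quarter)))
# ['age>50']: a 10-point drop that should feed corrective action and, if serious, Art. 73 reporting.
```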
✓ Use Cases
- A credit AI provider monitors quarterly accuracy drift across demographic segments post-deployment
- A healthcare AI developer collects clinician feedback reports as part of post-market surveillance
- An AI provider submits annual post-market performance summaries to market surveillance authorities
✗ Violations
- Considering compliance complete at launch and never revisiting model performance
- Not establishing a feedback loop from deployers to providers about AI system errors
Art. 73
Serious Incident Reporting
High Risk
Summary
Providers must report serious incidents to the market surveillance authority within 15 days of becoming aware of them (2 days for widespread infringements or incidents affecting critical infrastructure; 10 days in the event of a death). A "serious incident" includes death or serious harm to health caused by an AI system, serious and irreversible disruption of critical infrastructure, infringement of fundamental-rights protections, or serious harm to property or the environment.
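The deadlines map mechanically onto the incident type; the sketch below encodes the reading summarised above, with the category labels as assumptions.

```python
# Minimal sketch: reporting deadline (days from awareness) by incident type (Art. 73).
# Category labels are assumptions; the deadlines follow the summary above.
from datetime import date, timedelta

DEADLINE_DAYS = {
    "death": 10,                       # Art. 73(4)
    "critical_infrastructure": 2,      # Art. 73(3), also widespread infringement
    "widespread_infringement": 2,
    "other_serious_incident": 15,      # Art. 73(2) general case
}

def report_due(awareness_date: date, incident_type: str) -> date:
    return awareness_date + timedelta(days=DEADLINE_DAYS[incident_type])

print(report_due(date(2025, 3, 1), "other_serious_incident"))  # 2025-03-16
print(report_due(date(2025, 3, 1), "death"))                   # 2025-03-11
```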
✓ Use Cases
- A hospital AI misdiagnosis causing serious patient harm is reported immediately and within the 15-day deadline
- An AI safety system failure in an autonomous vehicle is reported within 15 days
- A predictive policing error leading to wrongful arrest is escalated as a serious incident
✗ Violations
- A provider concealing an AI-related patient death to avoid regulatory scrutiny
- Reporting after 30 days because the internal review process was too slow
Art. 99
Penalties & Fines
Enforcement
Summary
Prohibited AI practices (Art. 5): up to €35M or 7% of global annual turnover, whichever is higher. Most other violations: up to €15M or 3%. Supplying incorrect information to regulators: up to €7.5M or 1%. For SMEs and startups, each fine is capped at whichever of the fixed amount or percentage is lower.
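Each tier combines a fixed ceiling and a turnover percentage, taking whichever is higher (for SMEs, whichever is lower); a minimal arithmetic sketch follows, with the company figures as assumptions.

```python
# Minimal sketch of the Art. 99 fine ceilings (company figures are assumptions).
TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),   # Art. 5 violations
    "other_violation":       (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    bound = min if is_sme else max   # SMEs/startups: the lower of the two ceilings
    return bound(fixed, pct * global_turnover_eur)

print(f"{max_fine('other_violation', 20_000_000_000):,.0f}")               # 600,000,000
print(f"{max_fine('prohibited_practice', 50_000_000, is_sme=True):,.0f}")  # 3,500,000
```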
✓ Compliance Incentives
- A startup in a regulatory sandbox benefits from reduced penalties for first-time violations
- Voluntary disclosure of a compliance gap before enforcement can reduce penalty severity
- Demonstrated good-faith QMS and post-market monitoring mitigates financial exposure
✗ Violations Leading to Max Fines
- Deploying an unacceptable-risk AI (social scoring) commercially — €35M / 7% turnover
- A large provider with €20B+ global turnover faces a potential fine above €600M at the 3% tier
- Providing false information during a market surveillance investigation
Art. 57–61
AI Regulatory Sandboxes
Minimal Risk
Summary
Member States must establish at least one AI regulatory sandbox allowing AI systems to be developed, trained, and tested under regulatory supervision (including in real-world conditions) before market launch. Participants who follow the agreed sandbox plan in good faith are not subject to administrative fines for infringements identified during the sandbox, although liability towards third parties remains.
✓ Use Cases
- A startup tests a healthcare AI on live patient data within a supervised sandbox without full conformity requirements
- A city partners with an AI provider to trial a traffic management system in a designated sandbox zone
- A sandbox participant is shielded from certain penalties while testing a novel high-risk AI
✗ Violations
- Using sandbox status to deploy commercially beyond the approved test boundary
- Not maintaining the required monitoring and documentation even while in the sandbox